Stiefel manifold


Mirror Descent on Riemannian Manifolds

Jiang, Jiaxin, Shi, Lei, Tan, Jiyuan

arXiv.org Machine Learning

Mirror Descent (MD) is a scalable first-order method widely used in large-scale optimization, with applications in image processing, policy optimization, and neural network training. This paper generalizes MD to optimization on Riemannian manifolds. In particular, we develop a Riemannian Mirror Descent (RMD) framework via reparameterization and further propose a stochastic variant of RMD. We also establish non-asymptotic convergence guarantees for both RMD and stochastic RMD. When specialized to the Stiefel manifold, our RMD framework reduces to the Curvilinear Gradient Descent (CGD) method proposed in [26]. Moreover, specializing the stochastic RMD framework to the Stiefel setting yields a stochastic extension of CGD that effectively addresses large-scale manifold optimization problems.
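For intuition about curvilinear updates on the Stiefel manifold, the sketch below shows a generic Cayley-transform step in the spirit of curvilinear search methods. It is an illustration under assumed notation, not the CGD method of [26] or the paper's RMD framework; `cayley_step` and the demo objective are hypothetical.

```python
import numpy as np

def cayley_step(X, G, tau):
    """One curvilinear (Cayley-transform) step on the Stiefel manifold.

    X   : (n, p) matrix with orthonormal columns, X.T @ X = I_p
    G   : (n, p) Euclidean gradient of the objective at X
    tau : step size

    The skew-symmetric matrix W = G X^T - X G^T defines the curve
    Y(tau) = (I + tau/2 W)^{-1} (I - tau/2 W) X, which stays on the manifold.
    """
    n, _ = X.shape
    W = G @ X.T - X @ G.T                      # skew-symmetric, shape (n, n)
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * tau * W, (I - 0.5 * tau * W) @ X)

# Tiny demo on f(X) = -0.5 * trace(X^T A X), whose Euclidean gradient is -A X.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)); A = A + A.T
X, _ = np.linalg.qr(rng.normal(size=(8, 3)))   # random point on St(8, 3)
for _ in range(100):
    X = cayley_step(X, -A @ X, tau=0.1)
print(np.linalg.norm(X.T @ X - np.eye(3)))      # remains (numerically) orthonormal
```

Each step requires an n-by-n linear solve, which is one reason cheaper or stochastic variants are attractive for large-scale manifold optimization.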






Robust low-rank training via approximate orthonormal constraints

Neural Information Processing Systems

By modeling robustness in terms of the condition number of the neural network, we argue that this loss of robustness is due to the exploding singular values of the low-rank weight matrices.
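To make the condition-number view concrete, here is a generic sketch (an illustration, not the paper's algorithm): the condition number of a low-rank factorized weight W = U V^T, and a standard soft-orthonormality penalty ||U^T U - I||_F^2 that discourages exploding or collapsing singular values of the factor.

```python
import numpy as np

def condition_number(W, tol=1e-12):
    """Ratio of largest to smallest (numerically nonzero) singular value of W."""
    s = np.linalg.svd(W, compute_uv=False)
    s = s[s > tol]
    return s.max() / s.min()

def soft_orthonormality_penalty(U):
    """||U^T U - I||_F^2: a standard surrogate keeping the columns of a
    low-rank factor U close to orthonormal."""
    r = U.shape[1]
    gram_gap = U.T @ U - np.eye(r)
    return float(np.sum(gram_gap ** 2))

# Example: a rank-16 factorization W = U @ V.T of a layer's weight matrix.
rng = np.random.default_rng(0)
U, V = rng.normal(size=(256, 16)), rng.normal(size=(128, 16))
print(condition_number(U @ V.T), soft_orthonormality_penalty(U))
```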


On Slicing Optimality for Mutual Information

Fayad, Ammar

Neural Information Processing Systems

… P and Q, respectively, is tight in P(X × Y). (… Hero, 2004; Ghourchian et al., 2017), we present the outline of our argument in three steps: a family K ⊆ P(X) is tight iff the closure of K is sequentially compact in P(X) with respect to weak convergence. Remark 1. We could proceed differently by imposing stronger assumptions using the following … We briefly discuss the outline of the proof for the sake of completeness (Loève, 2017). The argument here depends on two important facts: 1. …
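For reference, the tightness/compactness equivalence the snippet relies on can be stated as follows; the notation is chosen here for illustration and may differ from the paper's.

```latex
% Tightness and weak sequential compactness, stated for a Polish space X.
\begin{definition}
A family $K \subseteq \mathcal{P}(\mathcal{X})$ of probability measures is
\emph{tight} if for every $\varepsilon > 0$ there exists a compact set
$C_\varepsilon \subseteq \mathcal{X}$ such that
$\mu(C_\varepsilon) \ge 1 - \varepsilon$ for all $\mu \in K$.
\end{definition}

\begin{theorem}[Prokhorov]
Let $\mathcal{X}$ be a Polish space. Then $K \subseteq \mathcal{P}(\mathcal{X})$
is tight if and only if the closure of $K$ is sequentially compact in
$\mathcal{P}(\mathcal{X})$ with respect to weak convergence.
\end{theorem}
```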